Asynchronous Distributed Optimization via Dual Decomposition and Block Coordinate Subgradient Methods

Authors

Abstract

In this article, we study the problem of minimizing a sum of potentially nondifferentiable convex cost functions with partially overlapping dependences in an asynchronous manner, that is, where the communication between nodes is not coordinated. We analyze an algorithm based on dual decomposition and block coordinate subgradient methods under assumptions that are weaker than those used in the literature. At the same time, we allow different agents to use local stepsizes with no global coordination. Sufficient conditions are provided for almost sure convergence to a solution of the optimization problem. Under additional assumptions, we establish a sublinear convergence rate that, in turn, can be strengthened to a linear rate if the problem is strongly convex and has Lipschitz gradients. We also extend available results in the literature by allowing multiple blocks to be updated at a time with nonuniform and time-varying probabilities assigned to the blocks. A numerical example is provided to illustrate the effectiveness of the algorithm.
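The dual decomposition and block coordinate subgradient scheme described in the abstract can be illustrated with a minimal two-agent consensus sketch. Everything below (the quadratic costs, the vectors `a` and `b`, the per-block stepsizes, and the coordinate sampling rule) is an illustrative assumption, not the paper's actual setup:

```python
import random

# Toy problem (made up for illustration): two agents hold quadratic costs
# f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2 and must agree on x.
# Dual decomposition introduces local copies x1, x2 coupled by x1 = x2,
# with a multiplier lam; dual coordinates (blocks) are then updated by
# randomized block subgradient ascent with uncoordinated local stepsizes.
a = [1.0, 4.0, -2.0]
b = [3.0, 0.0, 2.0]
d = len(a)

lam = [0.0] * d            # dual variables, one per coordinate block
steps = [0.5, 0.4, 0.3]    # per-block stepsizes, no global coordination

random.seed(0)
for _ in range(2000):
    # Local primal minimizers of the Lagrangian (closed form here):
    #   x1 = argmin ||x - a||^2 + <lam, x>  =  a - lam/2
    #   x2 = argmin ||x - b||^2 - <lam, x>  =  b + lam/2
    x1 = [a[i] - lam[i] / 2 for i in range(d)]
    x2 = [b[i] + lam[i] / 2 for i in range(d)]
    # Randomized block selection: each dual coordinate is updated with
    # probability 0.5; the dual subgradient in coordinate i is x1[i] - x2[i].
    for i in range(d):
        if random.random() < 0.5:
            lam[i] += steps[i] * (x1[i] - x2[i])

consensus = [(a[i] + b[i]) / 2 for i in range(d)]
print([round(v, 3) for v in x1])  # both local copies approach the consensus point
```

In this toy instance the dual subgradient in coordinate `i` is the constraint residual `x1[i] - x2[i]`, so each block update drives the two local copies toward agreement at the midpoint of `a` and `b`.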


Similar articles

DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization

Machine learning with big data often involves large optimization models. For distributed optimization over a cluster of machines, frequent communication and synchronization of all model parameters (optimization variables) can be very costly. A promising solution is to use parameter servers to store different subsets of the model parameters, and update them asynchronously at different machines usi...


Distributed Multi-Agent Optimization via Dual Decomposition

In this master thesis, a new distributed multi-agent optimization algorithm is introduced. The algorithm is based upon the dual decomposition of the optimization problem, together with the subgradient method for finding the optimal dual solution. The convergence of the new optimization algorithm is proved for communication networks with bounded time-varying delays, and noisy communication. Furth...


Asynchronous block-iterative primal-dual decomposition methods for monotone inclusions

We propose new primal-dual decomposition algorithms for solving systems of inclusions involving sums of linearly composed maximally monotone operators. The principal innovation in these algorithms is that they are block-iterative in the sense that, at each iteration, only a subset of the monotone operators needs to be processed, as opposed to all operators as in established methods. Determinist...


Distributed Asynchronous Dual-Free Stochastic Dual Coordinate Ascent

In this paper, we propose a new Distributed Asynchronous Dual-Free Coordinate Ascent method (dis-dfSDCA), and prove that it has a linear convergence rate in the convex case. Stochastic Dual Coordinate Ascent (SDCA) is a popular method for solving regularized convex loss minimization problems. The Dual-Free Stochastic Dual Coordinate Ascent (dfSDCA) method is a variation of SDCA, and can be applied to a mo...


Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization

Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA)—a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
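The combination of randomized block sampling with a dual averaging subgradient step can be sketched roughly as follows. The nonsmooth objective, the sampling probability, and all parameters below are made up for the demo and are not the SBDA method from the paper:

```python
import math
import random

# Illustrative nonsmooth problem: minimize f(x) = ||x - c||_1, where c
# is an arbitrary target chosen for the demo.
c = [1.0, -2.0]
d = len(c)

z = [0.0] * d     # running sum of sampled block subgradients
x = [0.0] * d     # current iterate
avg = [0.0] * d   # running average of the iterates

random.seed(1)
T = 20000
for t in range(1, T + 1):
    # A subgradient of the l1 objective in coordinate i: sign(x_i - c_i).
    g = [(x[i] > c[i]) - (x[i] < c[i]) for i in range(d)]
    # Randomized block sampling: each coordinate block is updated
    # independently with probability 0.7.
    for i in range(d):
        if random.random() < 0.7:
            z[i] += g[i]
    # Dual averaging step with prox function ||x||^2 / 2 and beta_t = sqrt(t):
    #   x_{t+1} = argmin_x <z, x> + sqrt(t) * ||x||^2 / 2 = -z / sqrt(t)
    x = [-z[i] / math.sqrt(t) for i in range(d)]
    # Averaged iterate, which is what subgradient guarantees apply to.
    avg = [avg[i] + (x[i] - avg[i]) / t for i in range(d)]

print([round(v, 2) for v in avg])  # averaged iterate approaches the minimizer c
```

As is typical for subgradient schemes, the raw iterates oscillate around the minimizer while the averaged iterate settles down, which is why the sketch tracks `avg` rather than `x`.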



Journal

Journal title: IEEE Transactions on Control of Network Systems

Year: 2021

ISSN: 2325-5870, 2372-2533

DOI: https://doi.org/10.1109/tcns.2021.3065644